Wednesday, May 28, 2025

Postgres 18 beta1: large server, Insert Benchmark, bad configurations

While testing Postgres 18 beta1 on a large server I used several configurations with io_workers set to values that are too large, and performance suffered. The default value for io_workers is 3 and that appears to be a great default. Perhaps other people won't repeat my mistakes.

tl;dr

  • the default value for io_workers is 3 and that is a good value to use
  • be careful about using larger values for io_workers as the performance penalty ranges from 0% (no penalty) to 24% (too much penalty)

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for version 18 beta1. I got the source for 18 beta1 from github using the REL_18_BETA1 tag. I started this benchmark effort a few days before the official release.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config files for 18 beta1 use names like conf.diff.cx10cw${Z}_c32r128 where $Z is the value for io_workers. All of these use io_method=worker. The files are here. I repeated tests for io_workers set to 2, 4, 6, 8, 16 and 32.
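
To make that concrete, below is a sketch of the two lines that matter for this post. The real conf.diff files (linked above) contain more settings, so treat this as illustrative rather than a copy of those files:

# sketch of the io settings in conf.diff.cx10cw4_c32r128
io_method=worker
io_workers=4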

The Benchmark

The benchmark is explained here and is run with 20 clients and 20 tables (table per client) and 200M rows per table.

The benchmark steps are:

  • l.i0
    • insert 200 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s

Results: overview

The performance report is here.

The summary section has 3 tables. The first shows absolute throughput for each DBMS+config tested and benchmark step. The second shows throughput relative to the config in the first row of the table, which here is io_workers=2. The third shows the background insert rate for the benchmark steps that have background inserts; all configs sustained the target rates. The second table makes it easy to see how performance changes as io_workers grows. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some config and $base is the result with io_workers=2.

When relative QPS is > 1.0 then performance improved relative to the base case. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures:
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.
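
As a worked example with hypothetical numbers: if the base case (io_workers=2) does 1000 point queries/s during qp1000 and io_workers=32 does 760/s, then relative QPS is 760 / 1000 = 0.76, which I would call a 24% regression and color red. The snippet below is a small sketch of that arithmetic and of the color buckets; the function names are mine and not part of the benchmark scripts.

# Sketch: relative QPS and the color buckets used below.
# Thresholds match the post: red <= 0.95, green >= 1.05, grey in between.
def relative_qps(qps_me: float, qps_base: float) -> float:
    return qps_me / qps_base

def bucket(rqps: float) -> str:
    if rqps <= 0.95:
        return "red"
    if rqps >= 1.05:
        return "green"
    return "grey"

# Hypothetical qp1000 numbers, base is io_workers=2
rq = relative_qps(760.0, 1000.0)
print(rq, bucket(rq))   # 0.76 red -> a 24% regression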

Results: details

The performance summary is here.

The summary of the summary is that larger values for io_workers ...
  • increase throughput by up to 4% for the initial load (l.i0) 
  • increase throughput by up to 12% for create index (l.x)
  • decrease throughput by up to 6% for write heavy (l.i1)
  • decrease throughput by up to 16% for write heavy (l.i2)
  • decrease throughput by up to 3% for range queries, note that this step is CPU-bound
  • decrease throughput by up to 24% for point queries, note that this step is IO-bound
The summary is:
  • the initial load step (l.i0)
    • rQPS for io_workers in (4, 6, 8, 16) was (1.03, 1.03, 1.03, 1.02, 1.04) so these were slightly faster than io_workers=2.
    • rQPS for io_workers=32 was 1.00
  • the create index step (l.x)
    • rQPS for io_workers in (4, 6, 8, 16, 32) was (1.06, 1.05, 1.07, 1.12, 1.11) so these were all faster than io_workers=2.
  • the write-heavy steps (l.i1, l.i2)
    • for l.i1 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.99, 0.99, 0.96, 0.94)
    • for l.i2 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.84, 0.95, 0.90, 0.88, 0.88)
    • I am surprised that larger values for io_workers don't help here but did help during the previous steps (l.i0, l.x) which are also write heavy.
  • the range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.99, 0.99, 0.99, 0.99, 0.99)
    • for qr500 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.98, 0.98, 0.97, 0.97)
    • for qr1000 the rQPS for io_workers in (4, 6, 8, 16, 32) was (1.01, 1.00, 0.99, 0.98, 0.97)
    • note that these steps are usually CPU-bound for Postgres because the indexes fit in memory
  • the point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.98, 0.97, 0.94, 0.90)
    • for qp500 the rQPS for io_workers in (4, 6, 8, 16, 32) was (1.00, 0.98, 0.97, 0.89, 0.81)
    • for qp1000 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.99, 0.95, 0.93, 0.86, 0.76)
    • these steps are IO-bound
For the regressions in one of the write-heavy steps (l.i2) I don't see an obvious problem in the vmstat and iostat metrics -- the amount of CPU, context switches and IO per operation has some variance but there isn't a difference that explains the change.

For the regressions in the point query steps (qp100, qp500, qp1000) the vmstat and iostat metrics for qp1000 help to explain the problem. Metrics that increase as io_workers increases include:
  • CPU/operation (see cpupq) has a large increase
  • context switches /operation (see cspq) has a small increase
  • iostat reads /operation (rpq) and KB read /operation (rkbpq) have small increases
Finally, average rates from iostat. These are not normalized by QPS. There aren't many differences, although rps (reads/s) is higher for io_workers=2 because throughput was higher in that case.

Legend:
* rps, wps - read /s and write /s
* rKBps, wKBps - KB read /s & KB written /s
* rawait, wawait - read & write latency
* rareqsz, wareqsz - read & write request size

-- from l.i2 benchmark step

rps     rKBps   rawait  rareqsz wps     wKBps   wawait  wareqsz io_workers
3468    34622   0.08    8.9     5374    85567   1.41    17.3     2
2959    24026   0.08    8.3     4866    74547   0.05    17.5    32

-- from qp1000 benchmark step

rps     rKBps   rawait  rareqsz wps     wKBps   wawait  wareqsz io_workers
81949   659030  0.13    8.0     39546   589789  168.21  16.5     2
68257   549016  0.12    8.0     36005   549028  130.44  16.2    32
 
